
    A Method for Finding Structured Sparse Solutions to Non-negative Least Squares Problems with Applications

    Demixing problems in many areas, such as hyperspectral imaging and differential optical absorption spectroscopy (DOAS), often require finding sparse nonnegative linear combinations of dictionary elements that match observed data. We show how aspects of these problems, such as misalignment of DOAS references and uncertainty in hyperspectral endmembers, can be modeled by expanding the dictionary with grouped elements and imposing a structured sparsity assumption that the combinations within each group should be sparse or even 1-sparse. If the dictionary is highly coherent, it is difficult to obtain good solutions using convex or greedy methods, such as non-negative least squares (NNLS) or orthogonal matching pursuit. We use penalties related to the Hoyer measure, which is the ratio of the $l_1$ and $l_2$ norms, as sparsity penalties added to the objective in NNLS-type models. To solve the resulting nonconvex models, we propose a scaled gradient projection algorithm that requires solving a sequence of strongly convex quadratic programs. We discuss its close connections to convex splitting methods and difference-of-convex programming. We also present promising numerical results for example DOAS analysis and hyperspectral demixing problems. Comment: 38 pages, 14 figures.
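As a rough illustration of the kind of objective involved, the sketch below runs plain projected gradient descent on a nonnegative least-squares problem with an added $l_1/l_2$ (Hoyer-type) ratio penalty. It is a minimal stand-in, not the paper's scaled gradient projection method, which instead solves a sequence of strongly convex quadratic programs; the step size, penalty weight, and initialization are arbitrary illustrative choices.

```python
import numpy as np

def hoyer_nnls(A, b, lam=0.1, step=1e-3, iters=5000):
    """Minimal sketch: min_{x >= 0} 0.5*||Ax - b||^2 + lam*||x||_1/||x||_2.

    Plain projected gradient descent, not the paper's scaled gradient
    projection algorithm; assumes the iterates stay away from x = 0.
    """
    # Start from a least-squares solution clipped to be strictly positive.
    x = np.maximum(np.linalg.lstsq(A, b, rcond=None)[0], 1e-6)
    for _ in range(iters):
        l2 = np.linalg.norm(x)
        # For x > 0, the gradient of ||x||_1/||x||_2 is
        # 1/||x||_2 - ||x||_1 * x / ||x||_2^3 componentwise.
        ratio_grad = np.ones_like(x) / l2 - x.sum() * x / l2**3
        grad = A.T @ (A @ x - b) + lam * ratio_grad
        x = np.maximum(x - step * grad, 0.0)  # project onto the nonnegative orthant
    return x
```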

    Partially Coherent Ptychography by Gradient Decomposition of the Probe

    Coherent ptychographic imaging experiments often discard over 99.9% of the flux from a light source to define the coherence of an illumination. Even when coherent flux is sufficient, the stability required during an exposure is another important limiting factor. Partial coherence analysis can considerably reduce these limitations. A partially coherent illumination can often be written as the superposition of a single coherent illumination convolved with a separable translational kernel. In this paper we propose Gradient Decomposition of the Probe (GDP), a model that exploits translational kernel separability, coupling the variances of the kernel with the transverse coherence. We describe GDP-ADMM, an efficient first-order splitting algorithm for solving the proposed nonlinear optimization problem. Numerical experiments demonstrate the effectiveness of the proposed method with Gaussian and binary kernel functions in fly-scan measurements. Remarkably, GDP-ADMM produces satisfactory results even when the ratio between kernel width and beam size is more than one, or when the distance between successive acquisitions is twice the beam width. Comment: 11 pages, 9 figures.
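The decomposition at the heart of the model can be mimicked in a few lines: a partially coherent measurement is simulated as an incoherent, kernel-weighted sum of far-field intensities from translated copies of a single coherent probe. This is a toy forward model under assumed names (`probe`, `obj`, integer `shifts`), not the GDP-ADMM solver itself.

```python
import numpy as np

def partially_coherent_intensity(probe, obj, shifts, weights):
    """Toy forward model: a partially coherent illumination written as a
    superposition of translated copies of one coherent probe, weighted by
    a separable kernel (the decomposition the GDP model exploits).

    Assumes `probe` and `obj` are 2-D arrays of the same shape; `shifts`
    is a list of integer (dy, dx) translations and `weights` the kernel
    values sampled at those shifts -- both illustrative choices here.
    """
    I = np.zeros(obj.shape)
    for (dy, dx), w in zip(shifts, weights):
        shifted = np.roll(probe, (dy, dx), axis=(0, 1))  # translated probe
        exit_wave = shifted * obj
        I += w * np.abs(np.fft.fft2(exit_wave))**2       # incoherent sum of intensities
    return I / sum(weights)
```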

    Non-convex approaches for low-rank tensor completion under tubal sampling

    Tensor completion is an important problem in modern data analysis. In this work, we investigate a specific sampling strategy referred to as tubal sampling. We propose two novel non-convex tensor completion frameworks that are easy to implement, named tensor $L_1$-$L_2$ (TL12) and tensor completion via CUR (TCCUR). We test the efficiency of both methods on synthetic data and a color image inpainting problem. Empirical results reveal a trade-off between the accuracy and time efficiency of the two methods at low sampling ratios. Each of them outperforms some classical completion methods in at least one aspect.
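For concreteness, one plausible reading of tubal sampling for a third-order tensor is that entire mode-3 tubes are either fully observed or fully missing, i.e. one Bernoulli draw per (i, j) index pair; the sketch below builds such a mask. The function name and the exact sampling details are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tubal_sample(T, ratio, seed=None):
    """Illustrative tubal sampling of a third-order tensor T (n1 x n2 x n3):
    keep or discard whole mode-3 tubes, so the mask depends only on the
    first two indices. An assumed reading of the sampling pattern.
    """
    rng = np.random.default_rng(seed)
    n1, n2, _ = T.shape
    mask = rng.random((n1, n2)) < ratio        # one Bernoulli draw per tube
    return T * mask[:, :, None], mask          # broadcast the mask along mode 3
```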

    Minimizing Quotient Regularization Model

    Quotient regularization models (QRMs) are a class of powerful regularization techniques that have gained considerable attention in recent years due to their ability to handle complex and highly nonlinear data sets. However, the nonconvex nature of QRM poses a significant challenge in finding its optimal solution. We are interested in scenarios where both the numerator and the denominator of QRM are absolutely one-homogeneous functions, which is widely applicable in the fields of signal processing and image processing. In this paper, we utilize a gradient flow to minimize such QRMs in combination with a quadratic data-fidelity term. Our scheme involves solving a convex problem iteratively. The convergence analysis is conducted on a modified scheme in a continuous formulation, showing convergence to a stationary point. Numerical experiments demonstrate the effectiveness of the proposed algorithm in terms of accuracy, outperforming state-of-the-art QRM solvers. Comment: 20 pages.
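A hedged sketch of one outer iteration of such a scheme, for the special case of numerator $\|x\|_1$ and denominator $\|x\|_2$ (both absolutely one-homogeneous): freezing a subgradient of the denominator at the current iterate leaves a convex subproblem, solved here by a basic proximal-gradient loop. This is an illustrative stand-in for the paper's gradient-flow scheme, with arbitrary step sizes.

```python
import numpy as np

def qrm_outer_step(A, b, x, lam, inner=200, step=1e-3):
    """One sketched outer iteration for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1/||x||_2  (x != 0 assumed).

    Freezing the quotient value q and a subgradient p of ||.||_2 at the
    current x yields the convex subproblem
        min_z 0.5*||Az - b||^2 + lam*||z||_1 - lam*q*<p, z>,
    handled below by proximal gradient (soft-thresholding for the l1 term).
    The step size must be small enough for the quadratic term; 1e-3 is an
    arbitrary illustrative choice.
    """
    q = np.linalg.norm(x, 1) / np.linalg.norm(x)   # current quotient value
    p = x / np.linalg.norm(x)                      # subgradient of ||.||_2 at x
    z = x.copy()
    for _ in range(inner):
        g = A.T @ (A @ z - b) - lam * q * p        # smooth part of the subproblem
        z = z - step * g
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of lam*||.||_1
    return z
```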

    Improvements on Uncertainty Quantification for Node Classification via Distance-Based Regularization

    Deep neural networks have achieved significant success in recent decades, but they are not well-calibrated and often produce unreliable predictions. A large body of literature relies on uncertainty quantification to evaluate the reliability of a learning model, which is particularly important for out-of-distribution (OOD) detection and misclassification detection. We are interested in uncertainty quantification for interdependent node-level classification. We start our analysis from graph posterior networks (GPNs), which optimize an uncertainty cross-entropy (UCE)-based loss function, and describe the theoretical limitations of the widely used UCE loss. To alleviate the identified drawbacks, we propose a distance-based regularization that encourages clustered OOD nodes to remain clustered in the latent space. We conduct extensive comparison experiments on eight standard datasets and demonstrate that the proposed regularization outperforms the state of the art in both OOD detection and misclassification detection. Comment: NeurIPS 2023.
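One way to picture the idea: add to the UCE loss a term that keeps the latent embeddings of (presumed) OOD nodes close to their own centroid, so a cluster that is compact in input space stays compact in latent space. The sketch below is an illustrative reading of "distance-based regularization", not the paper's exact term; `Z`, `is_ood`, and the weight `beta` are assumed names.

```python
import numpy as np

def distance_regularizer(Z, is_ood):
    """Hedged sketch: mean squared distance of (presumed) OOD latent
    embeddings Z[is_ood] to their own centroid. Penalizing this spread
    encourages OOD nodes to remain clustered in the latent space.
    """
    Z_ood = Z[is_ood]
    if len(Z_ood) == 0:
        return 0.0
    centroid = Z_ood.mean(axis=0)
    return np.mean(np.sum((Z_ood - centroid) ** 2, axis=1))
```

In training, such a term would be weighted and added to the base objective, e.g. `loss = uce_loss + beta * distance_regularizer(Z, is_ood)`, with `beta` a hypothetical trade-off parameter.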